107 research outputs found

    CORTICAL DYNAMICS OF AUDITORY-VISUAL SPEECH: A FORWARD MODEL OF MULTISENSORY INTEGRATION.

    In noisy settings, seeing the interlocutor's face helps to disambiguate what is being said. For this to happen, the brain must integrate auditory and visual information. Three major problems are (1) bringing together separate sensory streams of information, (2) extracting auditory and visual speech information, and (3) identifying this information as a unified auditory-visual percept. In this dissertation, a new representational framework for auditory-visual (AV) speech integration is offered. The experimental work (psychophysics and electrophysiology (EEG)) suggests specific neural mechanisms for solving problems (1), (2), and (3) that are consistent with a (forward) 'analysis-by-synthesis' view of AV speech integration. In Chapter I, multisensory perception and integration are reviewed, and a unified conceptual framework serves as background for the study of AV speech integration. In Chapter II, psychophysical tests with desynchronized AV speech inputs show the existence of a ~250 ms temporal window of integration for AV speech. In Chapter III, an EEG study shows that visual speech modulates the neural processing of auditory speech at early stages. Two functionally independent modulations are observed: (i) a ~250 ms amplitude reduction of auditory evoked potentials (AEPs) and (ii) a systematic temporal facilitation of the same AEPs as a function of the saliency of visual speech. In Chapter IV, an EEG study of desynchronized AV speech inputs shows that (i) fine-grained (gamma, ~25 ms) and (ii) coarse-grained (theta, ~250 ms) neural mechanisms simultaneously mediate the processing of AV speech. In Chapter V, a new illusory effect is described, in which non-speech visual signals modify the perceptual quality of auditory objects. EEG results show patterns of activation very different from those observed in AV speech integration, and an MEG experiment is subsequently proposed to test hypotheses on the origins of these differences. In Chapter VI, the 'analysis-by-synthesis' model of AV speech integration is contrasted with major speech theories. From a cognitive neuroscience perspective, the 'analysis-by-synthesis' model is argued to offer the most sensible representational system for AV speech integration. This thesis shows that AV speech integration results from both the statistical nature of stimulation and the inherent predictive capabilities of the nervous system.
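
    The ~250 ms temporal window reported in Chapter II is the kind of quantity typically estimated by fitting a tolerance curve to fusion or simultaneity judgments as a function of audiovisual lag. The Python sketch below illustrates that logic on synthetic data; the Gaussian shape, lag values, and response proportions are illustrative assumptions, not the dissertation's actual data or fitting procedure.

```python
# Hypothetical sketch: estimating a temporal window of integration from
# simultaneity judgments (synthetic data; not the dissertation's dataset).
import numpy as np
from scipy.optimize import curve_fit

# Audiovisual lags in ms (negative = audio leads) and the proportion of
# "fused/simultaneous" responses at each lag (illustrative values).
lags = np.array([-400, -300, -200, -100, 0, 100, 200, 300, 400], dtype=float)
p_fused = np.array([0.10, 0.25, 0.60, 0.90, 0.95, 0.92, 0.70, 0.30, 0.12])

def gaussian_window(lag, center, width, peak):
    """Bell-shaped tolerance to AV asynchrony (an assumed functional form)."""
    return peak * np.exp(-0.5 * ((lag - center) / width) ** 2)

(center, width, peak), _ = curve_fit(gaussian_window, lags, p_fused,
                                     p0=(0.0, 150.0, 1.0))

# Full width at half maximum as one conventional definition of the window.
fwhm = 2.355 * width
print(f"center = {center:.0f} ms, window (FWHM) ~ {fwhm:.0f} ms")
```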

    Learning-induced modulation of scale-free properties of brain activity measured with MEG

    Previous studies have suggested that infraslow brain activity could play an important role in cognition. Its scale-free properties (coarsely described by its 1/f power spectrum) are indeed modulated between contrasting conscious states (sleep vs. wakefulness). However, finer modulations remain to be investigated. Here, we use a robust multifractal analysis to investigate the group-level impact of perceptual learning (visual (V) or audiovisual (AV), N=12 subjects in each group) on source-reconstructed scale-free activity recorded with magnetoencephalography (MEG) during rest and task. We first observed a significant decrease of self-similarity in evoked activity during the task after both types of training. More interestingly, only the most efficient training (AV) induced a decrease of self-similarity in spontaneous activity at rest, whereas only V training induced an increase of multifractality in evoked activity.
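
    For readers unfamiliar with the method, self-similarity is commonly quantified with a wavelet log-variance (Abry-Veitch style) estimate of the Hurst exponent H. The sketch below shows that estimator in its simplest univariate form; the wavelet choice, number of levels, and the fractional-Gaussian-noise scaling convention Var(d_j) ∼ 2^(j(2H−1)) are assumptions here, and the paper's multifractal analysis is considerably richer.

```python
# Minimal wavelet log-variance Hurst estimate (simplified; the paper's
# multifractal formalism goes well beyond this univariate sketch).
import numpy as np
import pywt

def hurst_wavelet(x, wavelet="db3", levels=8):
    """Estimate H from the slope of log2 detail-coefficient variance vs. scale."""
    coeffs = pywt.wavedec(x, wavelet, level=levels)
    details = coeffs[1:][::-1]          # reorder so index 0 = finest scale
    j = np.arange(1, len(details) + 1)  # octave (scale) index
    log_var = np.array([np.log2(np.mean(d ** 2)) for d in details])
    slope, _ = np.polyfit(j, log_var, 1)
    return (slope + 1) / 2              # assumes Var(d_j) ~ 2**(j*(2H-1)) (fGn)

rng = np.random.default_rng(0)
print(hurst_wavelet(rng.standard_normal(2 ** 14)))  # white noise: H ~ 0.5
```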

    Convergence of neural activity towards localized multifractal attractors predicts learning ability

    In this paper, we provide evidence for scaling properties (self-similarity H, multifractality M) of human brain signals recorded with magnetoencephalography (MEG), and we demonstrate their functional relevance in a complex visual discrimination task by contrasting activity before and after training. The analysis of these scale invariances shows that the brain can adopt two complementary strategies for efficient learning: either reduce H in the associative areas underlying neural plasticity, or make M converge towards an asymptotic attractor in those same areas.

    MODULATION OF SCALE-FREE PROPERTIES OF BRAIN ACTIVITY IN MEG

    The analysis of scale-free (i.e., 1/f power spectrum) brain activity has emerged in the last decade, since low-frequency fluctuations have been shown to interact with oscillatory activity in electrophysiology, notably when exogenous factors (stimuli, tasks) are delivered to the human brain. However, there are major difficulties in measuring scale-free activity in neuroimaging data, which are noisy and possibly nonstationary. Here, we use multifractal analysis to better understand the biological meaning of scale-free activity recorded with magnetoencephalography (MEG). On a cohort of 20 subjects, we demonstrate the presence of self-similarity on all sensors during rest and visually evoked activity. We also report significant multifractality on the norm of the gradiometer signals. Finally, on the latter signals we show how self-similarity and multifractality are modulated between ongoing and evoked activity.
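
    A minimal way to see the 1/f signature mentioned above is to fit a line to the low-frequency part of a log-log power spectrum. The sketch below does this with a Welch estimate; the frequency band, segment length, and validation on a random walk (for which β ≈ 2) are illustrative assumptions rather than the paper's pipeline.

```python
# Hedged sketch: estimating the 1/f exponent beta from a Welch spectrum.
import numpy as np
from scipy.signal import welch

def spectral_slope(x, fs, fmin=0.1, fmax=5.0):
    """Fit log P(f) = -beta * log f + c over a low-frequency band."""
    f, pxx = welch(x, fs=fs, nperseg=4096)
    band = (f >= fmin) & (f <= fmax)
    beta, _ = np.polyfit(np.log(f[band]), np.log(pxx[band]), 1)
    return -beta  # P(f) ~ 1 / f**beta

rng = np.random.default_rng(1)
x = np.cumsum(rng.standard_normal(2 ** 16))  # random walk: beta ~ 2
print(spectral_slope(x, fs=100.0))
```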

    Hierarchically nested networks optimize the analysis of audiovisual speech

    In conversational settings, seeing the speaker’s face elicits internal predictions about the upcoming acoustic utterance. Understanding how the listener’s cortical dynamics tune to the temporal statistics of audiovisual (AV) speech is thus essential. Using magnetoencephalography, we explored how large-scale frequency-specific dynamics of human brain activity adapt to AV speech delays. First, we show that the amplitude of phase-locked responses parametrically decreases with natural AV speech synchrony, a pattern that is consistent with predictive coding. Second, we show that the temporal statistics of AV speech affect large-scale oscillatory networks at multiple spatial and temporal resolutions. We demonstrate a spatial nestedness of oscillatory networks during the processing of AV speech: these oscillatory hierarchies are such that high-frequency activity (beta, gamma) is contingent on the phase response of low-frequency (delta, theta) networks. Our findings suggest that the endogenous temporal multiplexing of speech processing confers adaptability within the temporal regimes that are essential for speech comprehension.
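
    The claim that high-frequency activity is contingent on low-frequency phase is the signature of phase-amplitude coupling. As one common way to quantify it (not necessarily the analysis used in the paper), the sketch below computes a Canolty-style mean-vector-length modulation index; the theta and gamma bands and the synthetic test signal are assumptions.

```python
# Illustrative sketch: phase-amplitude coupling via a mean-vector-length
# (MVL) modulation index. Bands and signal are assumptions for demonstration.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, fs, lo, hi, order=4):
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def modulation_index(x, fs, phase_band=(4, 8), amp_band=(30, 80)):
    phase = np.angle(hilbert(bandpass(x, fs, *phase_band)))  # theta phase
    amp = np.abs(hilbert(bandpass(x, fs, *amp_band)))        # gamma envelope
    return np.abs(np.mean(amp * np.exp(1j * phase)))         # MVL index

fs = 500.0
t = np.arange(0, 20, 1 / fs)
theta = np.sin(2 * np.pi * 6 * t)
# Gamma whose amplitude waxes with theta phase -> nonzero coupling.
sig = theta + (1 + theta) * 0.3 * np.sin(2 * np.pi * 50 * t)
print(modulation_index(sig, fs))
```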

    Auditory Cortical Plasticity in Learning to Discriminate Modulation Rate

    The discrimination of temporal information in acoustic inputs is a crucial aspect of auditory perception, yet very few studies have focused on auditory perceptual learning of timing properties and associated plasticity in adult auditory cortex. Here, we trained participants on a temporal discrimination task. The main task used a base stimulus (four tones separated by intervals of 200 ms) that had to be distinguished from a target stimulus (four tones with intervals down to ∼180 ms). We show that participants' auditory temporal sensitivity improves with a short amount of training (3 d, 1 h/d). Learning to discriminate temporal modulation rates was accompanied by a systematic amplitude increase of the early auditory evoked responses to trained stimuli, as measured by magnetoencephalography. Additionally, learning and auditory cortex plasticity partially generalized to interval discrimination but not to frequency discrimination. Auditory cortex plasticity associated with short-term perceptual learning was manifested as an enhancement of auditory cortical responses to trained acoustic features only in the trained task. Plasticity was also manifested as induced non-phase-locked high gamma-band power increases in inferior frontal cortex during performance in the trained task. Functional plasticity in auditory cortex is here interpreted as the product of bottom-up and top-down modulations.
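
    To make the stimulus design concrete, here is a minimal sketch generating the base (200 ms) and target (~180 ms) four-tone sequences; the tone frequency, tone duration, sampling rate, and the reading of "interval" as inter-onset interval are assumptions, since the paper specifies its own parameters.

```python
# Minimal sketch of the four-tone stimulus logic (parameters assumed;
# a real experiment would also ramp tone onsets/offsets to avoid clicks).
import numpy as np

def four_tone_sequence(interval_ms, fs=44100, tone_ms=50, freq=1000.0):
    """Four tones whose onsets are separated by `interval_ms`."""
    tone_n = int(fs * tone_ms / 1000)
    step_n = int(fs * interval_ms / 1000)
    t = np.arange(tone_n) / fs
    tone = np.sin(2 * np.pi * freq * t)
    out = np.zeros(3 * step_n + tone_n)
    for k in range(4):
        out[k * step_n : k * step_n + tone_n] += tone
    return out

base = four_tone_sequence(200)    # standard: 200 ms inter-onset intervals
target = four_tone_sequence(180)  # deviant: ~180 ms inter-onset intervals
```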

    The Neural Substrates of Subjective Time Dilation

    An object moving towards an observer is subjectively perceived as longer in duration than the same object that is static or moving away. This “time dilation effect” has been shown for a number of stimuli that differ from standard events along different feature dimensions (e.g. color, size, and dynamics). We performed an event-related functional magnetic resonance imaging (fMRI) study, while subjects viewed a stream of five visual events, all of which were static and of identical duration except the fourth one, which was a deviant target consisting of either a looming or a receding disc. The duration of the target was systematically varied and participants judged whether the target was shorter or longer than all other events. A time dilation effect was observed only for looming targets. Relative to the static standards, the looming as well as the receding targets induced increased activation of the anterior insula and anterior cingulate cortices (the “core control network”). The decisive contrast between looming and receding targets, which captures the time dilation effect, showed strongly asymmetric activation, specifically of cortical midline structures (the “default network”). These results provide the first evidence that the illusion of temporal dilation is due to activation of areas that are important for cognitive control and subjective awareness. The involvement of midline structures in the temporal dilation illusion is interpreted as evidence that time perception is related to self-referential processing.

    Decoding perceptual thresholds from MEG/EEG

    Magnetoencephalography (MEG) can map brain activity by recording the electromagnetic fields generated by the electrical currents in the brain during a perceptual or cognitive task. This technique offers a very high temporal resolution that allows noninvasive brain exploration at a millisecond (ms) time scale. Decoding, a.k.a. brain reading, consists in predicting the subject's behavior and/or the parameters of the perceived stimuli from neuroimaging data, typically with supervised learning techniques. In this work we consider the problem of decoding a target variable with ordered values. This target reflects the use of a parametric experimental design in which a parameter of the stimulus is continuously modulated during the experiment. The decoding step is performed with Ridge regression and, given the ordinal nature of the target, evaluation relies on a ranking metric. On a visual paradigm consisting of random-dot kinematograms with 7 coherence levels, recorded in 36 subjects, we show that the subjects' perceptual thresholds can be predicted from the MEG data. Results are obtained in sensor space and for source estimates in relevant regions of interest (MT, pSTS, mSTS, VLPFC).
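
    A minimal sketch of the decoding pipeline described above: Ridge regression on trial-wise features against the ordinal target, scored with a rank correlation under cross-validation. The synthetic data, feature dimensions, regularization strength, and the choice of Kendall's tau as the ranking metric are assumptions.

```python
# Hedged sketch of ordinal decoding: Ridge regression + ranking metric.
import numpy as np
from scipy.stats import kendalltau
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

rng = np.random.default_rng(2)
n_trials, n_features = 350, 204  # e.g., trials x sensor features (assumed)
y = rng.integers(1, 8, size=n_trials).astype(float)  # coherence level 1..7
# Synthetic features carrying a linear trace of the target plus noise.
X = np.outer(y, rng.standard_normal(n_features)) \
    + rng.standard_normal((n_trials, n_features))

scores = []
for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    pred = Ridge(alpha=1.0).fit(X[train], y[train]).predict(X[test])
    tau, _ = kendalltau(y[test], pred)  # ranking metric on ordinal target
    scores.append(tau)

print(f"mean Kendall tau = {np.mean(scores):.2f}")
```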

    Multivariate Hurst Exponent Estimation in fMRI. Application to Brain Decoding of Perceptual Learning

    So far considered as noise in neuroscience, irregular arrhythmic field potential activity accounts for the majority of the signal power recorded in EEG or MEG [1, 2]. This brain activity follows a power-law spectrum P(f) ∼ 1/f^β in the limit of low frequencies, which is a hallmark of scale invariance. Recently, several studies [1, 3–6] have shown that the slope β (or equivalently the Hurst exponent H) tends to be modulated by task performance or cognitive state (e.g., sleep vs. awake). These observations were confirmed in fMRI [7–9], although the short length of fMRI time series makes these findings less reliable. In this paper, to compensate for the slower sampling rate of fMRI, we extend the univariate wavelet-based Hurst exponent estimator to a multivariate setting using spatial regularization. Next, we demonstrate the relevance of the proposed tools on resting-state fMRI data recorded in three groups of individuals after they were specifically trained on a visual discrimination task during a MEG experiment [10]. In a supervised classification framework, our multivariate approach better predicts the type of training the participants received than its univariate counterpart.
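
    To illustrate the principle of spatially regularized estimation, in a deliberately simplified form that differs from the paper's joint multivariate estimator, the toy sketch below smooths noisy per-voxel Hurst estimates on a one-dimensional chain of voxels with a graph-Laplacian penalty, which admits a closed-form solution.

```python
# Toy sketch: spatial regularization of per-voxel Hurst estimates on a chain
# graph. This shows the principle only, not the paper's actual estimator.
import numpy as np

def regularize(h_univ, lam=2.0):
    """Solve argmin_h ||h - h_univ||^2 + lam * h^T L h (chain Laplacian L)."""
    n = len(h_univ)
    L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    L[0, 0] = L[-1, -1] = 1  # degree-1 endpoints of the chain
    return np.linalg.solve(np.eye(n) + lam * L, h_univ)

rng = np.random.default_rng(3)
true_h = np.concatenate([np.full(20, 0.6), np.full(20, 0.9)])
noisy_h = true_h + 0.15 * rng.standard_normal(40)  # noisy per-voxel estimates
smooth_h = regularize(noisy_h)
# Spatial coupling typically reduces the estimation error:
print(np.abs(noisy_h - true_h).mean(), np.abs(smooth_h - true_h).mean())
```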

    Distortions of Subjective Time Perception Within and Across Senses

    Background: The ability to estimate the passage of time is of fundamental importance for perceptual and cognitive processes. One experience of time is the perception of duration, which is not isomorphic to physical duration and can be distorted by a number of factors. Yet, the critical features generating these perceptual shifts in subjective duration are not understood. Methodology/Findings: We used prospective duration judgments within and across sensory modalities to examine the effect of stimulus predictability and feature change on the perception of duration. First, we found robust distortions of perceived duration in auditory, visual and auditory-visual presentations despite the predictability of the feature changes in the stimuli. For example, a looming disc embedded in a series of steady discs led to time dilation, whereas a steady disc embedded in a series of looming discs led to time compression. Second, we addressed whether visual (auditory) inputs could alter the perceived duration of auditory (visual) inputs. When participants were presented with incongruent audio-visual stimuli, the perceived duration of auditory events could be shortened or lengthened by the presence of conflicting visual information; however, the perceived duration of visual events was seldom distorted by the presence of auditory information, and visual events were never perceived as shorter than their actual durations. Conclusions/Significance: These results support the existence of multisensory interactions in the perception of duration and, importantly, suggest that vision can modify auditory temporal perception in a pure timing task. Insofar as distortions in subjective duration cannot be accounted for by the unpredictability of an auditory, visual or auditory-visual event, we propose that it is the intrinsic features of the stimulus that critically affect subjective time distortions.